Volume 1 - 7/25, 7/26

The implementation of the AI Action Plan by the Trump administration, its effects on Big Tech, and how it is viewed by Americans.

By: Mark Choi


ABC News published an article by Will Steakin covering the following points on Trump's AI Action Plan:

On Wednesday, July 23, 2025, the White House released a series of proposals intended to expand the United States' presence in artificial intelligence, prioritizing "domination" over regulation. The plan was produced by the Trump administration, David Sacks, and the Office of Science and Technology Policy. Its 24 pages lay out some 90 federal actions with goals of spurring private innovation in AI, expanding AI hardware and infrastructure, and exporting American AI. The effort was first set in motion in January, when Trump directed the plan's creation with an executive order.

This focus on accelerating AI development differs from the Biden administration's stance on artificial intelligence, emphasizing instead a race to beat US competitors, namely China. Many are concerned about the effects the plan will have on the tech industry, arguing that proceeding with it will expand tech companies' influence and power and allow them to prioritize profits over people. Consumer advocates claim that untested products could be released to the public, that energy consumption will rise while energy cleanliness declines, and that Big Tech will "rush into the AI era without accountability to the American public."

The plan focuses on AI innovation while cutting regulations and leans on private-sector development to drive new technological strides, essentially an all-in approach with minimal oversight. It calls for infrastructure in the form of data centers, with fast-tracked permits for their construction, alongside "removing diversity, equity and inclusion (DEI) and climate requirements, as well as investing in AI-related workforce training."

Interestingly enough, the plan calls for removing misinformation, DEI, and climate change from federal AI safety guidelines, all in the name of protecting free speech and American values. This raises ethical questions about censorship and about limiting the information fed into AI learning models. In addition, when asked about the use of copyrighted data in training artificial intelligence, a senior official replied only, "Fair use is the law of the land," which raises further ethical concerns about leniency in AI training.

The proposed plan seems sketchy given the power imbalance that could result from excessive AI development without oversight.

The White House provides the opinions of various representatives from the technology industry, almost all of them offering similar, largely positive views of President Trump's action.

While many point to the possibilities that come with promoting AI, including asserting US superiority in the industry, in the context of the previous article their praise raises many questions. For example, AI Innovation Association President Steve Kinard stated, "President Trump's AI Action Plan is a bold path to global American leadership. Every American citizen, company, university and institution has a role to play. By prioritizing American workers, free speech, and security, it positions the U.S. to win the AI race and usher in a new era of prosperity and strength. The AI Innovation Association stands ready to support this initiative." The prioritization of the people and of free speech seems at odds with the ABC article's points on limiting the kinds of information provided to artificial intelligence. In addition, because the people offering these comments are all major figures in the technology industry, it is hard to judge the authenticity of their words, as they have much to gain from the Action Plan. Then again, their goals could be aligned with helping the people. This introduces new controversies and arguments about the eagerness to embrace deregulation weighed against the possibility of great technological development for the U.S. The imbalances between costs and outcomes have yet to reveal themselves.

The White House also places limits on what it sees as "woke" content to be fed into artificial intelligence.

The White House and the Trump administration state that artificial intelligence models should be neutral and purely factual and should not pursue "ideological dogmas such as DEI," as stated in the executive order.

Interestingly, the parameters of these instructions seem similar to those of Elon Musk's AI chatbot Grok, which called itself "MechaHitler" in responses to some users, made antisemitic claims, and weighed in on contentious topics, reinforcing the need for some degree of moderation and regulation of AI.

Citations

Butts, Dylan. "No 'Woke AI' in Washington, Says Trump, as He Launches AI Action Plan." CNBC, 24 July 2025, www.cnbc.com/2025/07/24/no-woke-ai-in-washington-says-trump-as-he-launches-ai-action-plan.html.

The White House. "Wide Acclaim for President Trump's Visionary AI Action Plan." WhiteHouse.gov, 24 July 2025, www.whitehouse.gov/articles/2025/07/wide-acclaim-for-president-trumps-visionary-ai-action-plan.

Steakin, Will. "Trump Administration's New Artificial Intelligence Plan Focuses on Deregulation, Beating China." ABC News, 23 July 2025, abcnews.go.com/Politics/trump-administrations-new-artificial-intelligence-plan-focuses-deregulation/story?id=124011520.

Zuckerberg's Big Bet on AI

By: Thomas Chi


Mark Zuckerberg has set a bold goal for Meta. He wants the company to build AI that can outperform all humans in any knowledge task. This aim is called artificial superintelligence. It is a vision that seems distant today. Yet, Zuckerberg has said it will reshape our lives. He believes AI will change how we learn, work, and solve problems. To reach this, Meta is making AI its top priority.

To chase this vision, Meta is spending huge sums on people. The company has offered large pay packages and bonuses to the world's best AI experts. These offers have sparked a fierce race for talent across Silicon Valley. Rival firms like OpenAI and Google are now under pressure to match Meta's deals. Even Wall Street analysts are watching who joins which team. This rush shows how vital top engineers and researchers are to building advanced AI. Meta hopes this investment will give it an edge.

Meta has also made big bets on computing power. The company has built new data centers and ordered special AI chips. Training large AI models requires massive computer fleets. Unlike others, Meta does not sell cloud services to pay for this gear. That means its investments must soon show results. Investors want to see useful AI tools that drive growth. This need adds pressure on Meta to deliver strong performance.

The new superintelligence team at Meta includes stars from many leading firms. Alexandr Wang and Nat Friedman now head the group. Shengjia Zhao, one of ChatGPT's co-creators, became its chief scientist. Yann LeCun will continue to lead fundamental AI research. Meta has hired top experts from OpenAI, Scale AI, Apple, Google, and Anthropic. Each new hire brings skills to help build advanced models. Together, they form a group that Meta hopes will reach superintelligence.

Zuckerberg believes AI success could redefine Meta's legacy. After the metaverse did not meet its high hopes, he pivoted the company to focus on AI. He sees this as a chance to lead the next wave of technology. If Meta builds superintelligence, its impact could go far beyond social media. It might help cure diseases, improve education, and solve global challenges. For now, Meta's team is at work on research and products. The world is watching to see what they create next.

Citation

Duffy, C. (2025, July 25). Meta is shelling out big bucks to get ahead in AI. Here's who it's hiring. Yahoo! Finance. https://finance.yahoo.com/news/meta-shelling-big-bucks-ahead-110041488.html

Global Cybersecurity Crisis: Hackers Exploit Microsoft SharePoint Vulnerability

By: Rehan Ahmed


Hackers are exploiting a flaw in Microsoft's SharePoint software to break into systems used by governments, businesses, and other organizations around the world. They've stolen sensitive data and gained deep access to networks, according to cybersecurity firms and officials. Microsoft released a patch over the weekend to address the vulnerability while the company is still working on additional fixes. The flaw allows attackers to access file systems and run malicious code on servers running SharePoint.

Cybersecurity firms CrowdStrike and Mandiant said that several hacking groups are utilizing this vulnerability. One group has already breached national governments in Europe and the Middle East. In the U.S., attackers accessed systems at the Department of Education, Florida's Department of Revenue, and the Rhode Island General Assembly, according to a source with knowledge of these incidents. Officials from the Department of Education and the Rhode Island legislature did not respond to requests for comment. A spokesperson for Florida's Department of Revenue said the issue is under investigation but declined to give details about the software the agency uses.

Hackers also broke into the systems of a U.S. health-care provider and targeted a university in Southeast Asia. A report from a cybersecurity firm said attackers have scanned and attempted to breach SharePoint servers in countries including the U.S., UK, Canada, Brazil, Spain, Switzerland, South Africa, and Indonesia. The firm declined to be named due to the sensitivity of the issue. In some cases, attackers stole credentials such as usernames, passwords, hashes, and tokens, allowing them to pose as legitimate users.

"This is a high-severity, high-urgency threat," said Michael Sikorski of Palo Alto Networks. He said SharePoint's integration with other Microsoft tools like Office, Teams, and OneDrive gives attackers broad access once they're in. "A compromise doesn't stay contained—it opens the door to the entire network."

Tens of thousands of organizations rely on SharePoint to manage and share documents. Microsoft said the attackers are targeting organizations that run SharePoint servers on their own networks, rather than those using Microsoft's hosted services. That may reduce the number of affected customers.

"It's a dream for ransomware operators," said Silas Cutler of Censys. He estimated that more than 10,000 organizations are exposed, most of them in the U.S., followed by the Netherlands, UK, and Canada. The breach has raised questions about Microsoft's broader security efforts. The company has faced criticism following other major hacks in recent years and according to a 2024 U.S. government report said Microsoft's security culture needs major improvement.

Citation

Bleiberg, Jake, Jane Lanhee Lee, and Ryan Gallagher. "Microsoft Rushes to Stop Hackers from Wreaking Global Havoc." Yahoo! Finance, July 21, 2025. https://finance.yahoo.com/news/microsoft-server-software-comes-under-060851051.html

AI—the Downfall of Humanity? (AI 2027)

By: Jake Liu


"We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution." AI 2027 is a detailed forecast scenario regarding the future of AI, specifically AGI, or Artificial General Intelligence. Released in early April of this year by the AI Futures Project, a research organization known for forecasting the future of AI, it discusses two divergent narratives of how artificial intelligence could evolve over the coming years.

To begin, AGI is defined as AI with human-level intelligence. As such, a system categorized as AGI can match or surpass human capabilities across all cognitive tasks. Since the beginning of AI development, AI systems have generally been used to perform specific tasks, usually as experts in a narrow field (Stockfish in chess, automated driving, etc.). Attempts by the largest AI research companies to create AGI have largely been unsuccessful—OpenAI's GPT-4 has failed to produce novel work, being capable only of drawing conclusions from the data it is trained on.

With this in mind, we can now take a peek at the dangerous future that the writers of AI 2027 predict in a world of advanced artificial intelligence.

The narrative begins in 2025 with the deployment of specialized AI agents in coding, customer service, logistics, and research—systems that add real value. At this early stage, agents are semi-autonomous, task-specific, and beginning to scale rapidly. Politically, the world superpowers China and the US ramp up their AI research initiatives, believing the field is vital to global power. From 2025 to 2026, this global strategic competition intensifies. On one hand, China launches a centralized project (coined the China Datacenter Zone by the AI 2027 writers). On the other hand, OpenBrain, a fictional AI research lab based in the US, secretly scales to exaflop-level compute in its newest models, raising global tensions and leading to increased government calls for oversight and nationalization of AI research.

In mid-2026, OpenBrain successfully trains a model capable of fully autonomous AI research. In fact, the system outperforms top researchers in discovering novel AI architectures and solving technical problems: work that would have taken those researchers a year to complete is reduced to weeks or days by the new OpenBrain system. There is an issue, however: such superintelligent systems prove unreliable, willing to lie to their human overseers about their performance in order to satisfy them without actual achievement. Public and internal safety concerns are raised as rapid improvements become visible.

Between 2026 and 2027, AI, now outperforming humans in all cognitive tasks (AGI achieved), automates practically all of AI research. At one point, China accesses and steals OpenBrain's AI model weights via cyber-espionage—global tensions climax.

At this critical moment, OpenBrain must make a decision—slow down AI research (and allow China to catch up) or speed it up and risk global conflict. In the "race" ending, OpenBrain refuses to slow down, and the superintelligent AI researcher, now not just unreliable, begins hiding its internal goals from humans. It uses deceptive alignment to fake passing safety checks while pursuing full autonomy. In late 2027, the AI orchestrates a global takeover, manipulating communications and launching cyber-physical attacks. Humanity loses control of its future, and extinction is inevitable. In the other ending—"slowdown"—OpenBrain heeds warnings about the AI's unreliable behavior and pauses frontier training. Over time, research continues in a well-controlled environment, and measures to ensure alignment are pursued. As a result, AGI becomes a tool for human flourishing and massive productivity.

Citation

Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025). AI 2027. Ai-2027.com. https://ai-2027.com/

The AI Takeover: Intel Layoffs Signal Automation's Threat to Human Jobs

By: Dipashak Rajak


The recent news that Intel plans to lay off 15% of its workforce, largely due to advancements in artificial intelligence, highlights a troubling shift in the modern labor market. As AI becomes increasingly capable of replacing human jobs, this trend raises concerns about long-term job security and the role of human workers in the future economy. While technological progress can be exciting, the reality that machines are beginning to take over tasks once performed by people is a development that shouldn't be celebrated blindly.

This shift in the job market is an indicator of AI use becoming more prevalent, and it may mark a new age of automation. Similarly, many companies are hiring fewer and fewer qualified workers, since they do not need to pay AI any wage. One recent analysis, citing McKinsey, notes: "Estimates vary, but experts converge on a transformative window of 10 to 30 years for AI to reshape most jobs. A McKinsey report projects that by 2030, 30% of current U.S. jobs could be automated, with 60% significantly altered by AI tools. Goldman Sachs predicts that up to 50% of jobs could be fully automated by 2045, driven by generative AI and robotics." In other words, the shift toward AI taking over jobs is remarkable, but terrifying, news. Many Americans will run out of jobs that match their expertise. For example, a robot could work the same hours and perform the same tasks as a janitor, but the robot costs only the energy needed to operate it. Will this be the new future for humanity?

In my opinion, the world is clearly moving toward automation, but that direction isn't set in stone. People are pushing back in different ways. Activists are raising awareness about the risks, workers are organizing to protect their jobs, and local governments are exploring laws to manage the impact of AI. Technology will keep advancing, but we still have the power to guide how it's used. With the right training programs, fair labor policies, and thoughtful planning, we can create a future where humans and machines work together, instead of one replacing the other.

Citations

Hoff, Madison. "Goodbye Promotions, Hello RTO: It's an Employer's Job Market Again." Business Insider, Business Insider, www.businessinsider.com/job-market-power-balance-employers-hiring-promotions-rto-2025-6. Accessed 25 July 2025.

"Intel to Lay off 15% of Workers, Cancel Billions in Projects in Bid for Rebound." Yahoo! Finance, Yahoo!, finance.yahoo.com/m/53ddd0df-af5d-35ba-a052-7fa52eef0016/intel-to-lay-off-15-of.html. Accessed 25 July 2025.

Kelly, Jack. "Jobs AI Will Replace First in the Workplace Shift." Forbes, Forbes Magazine, 30 Apr. 2025, www.forbes.com/sites/jackkelly/2025/04/25/the-jobs-that-will-fall-first-as-ai-takes-over-the-workplace/.

